40 research outputs found

    Magnification-independent Histopathological Image Classification with Similarity-based Multi-scale Embeddings

    The classification of histopathological images is of great value in both cancer diagnosis and pathological studies. However, multiple factors, such as variations caused by magnification and class imbalance, make it a challenging task on which conventional methods that learn from image-label datasets often perform unsatisfactorily. We observe that tumours of the same class often share common morphological patterns. To exploit this fact, we propose an approach that learns similarity-based multi-scale embeddings (SMSE) for magnification-independent histopathological image classification. In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs and image triplets. The learned embeddings provide accurate measurements of similarity between images, which we regard as a more effective representation of histopathological morphology than ordinary image features. Furthermore, to ensure the generated models are magnification-independent, images acquired at different magnification factors are fed to the networks simultaneously during training to learn multi-scale embeddings. In addition, to eliminate the impact of class imbalance, instead of using a hard-sample mining strategy that simply discards easy samples, we introduce a new reinforced focal loss that penalizes hard, misclassified samples while suppressing easy, well-classified ones. Experimental results show that the SMSE improves performance on histopathological image classification tasks for both breast and liver cancers by a large margin compared to previous methods. In particular, the SMSE achieves the best performance on the BreakHis benchmark, with an improvement ranging from 5% to 18% over previous methods using traditional features.
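The pair/triplet losses and the focal loss that this abstract builds on have standard textbook forms. As a rough illustration only (the paper's exact pair loss and "reinforced" focal-loss variant are not specified here, and all function names are mine), a plain triplet loss and binary focal loss look like:

```python
import math
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor towards a same-class
    (positive) embedding and push it away from a different-class
    (negative) embedding by at least `margin`."""
    d_ap = np.linalg.norm(np.asarray(anchor) - np.asarray(positive))
    d_an = np.linalg.norm(np.asarray(anchor) - np.asarray(negative))
    return max(0.0, d_ap - d_an + margin)

def focal_loss(p, gamma=2.0):
    """Standard binary focal loss on the true-class probability p.
    The (1 - p)**gamma factor suppresses easy, well-classified
    samples; gamma = 0 recovers plain cross-entropy."""
    return -((1.0 - p) ** gamma) * math.log(p)
```

With gamma = 2, a hard sample (p = 0.1) incurs a loss three orders of magnitude larger than an easy one (p = 0.9), which is the imbalance-handling effect the abstract describes.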

    POST-IVUS: A perceptual organisation-aware selective transformer framework for intravascular ultrasound segmentation

    Intravascular ultrasound (IVUS) is recommended for guiding coronary intervention. The segmentation of the coronary lumen and external elastic membrane (EEM) borders in IVUS images is a key step, but the manual process is time-consuming, error-prone, and subject to inter-observer variability. In this paper, we propose a novel perceptual organisation-aware selective transformer framework that achieves accurate and robust segmentation of the vessel walls in IVUS images. In this framework, temporal context-based feature encoders extract efficient motion features of vessels. A perceptual organisation-aware selective transformer module is then proposed to extract accurate boundary information, supervised by a dedicated boundary loss. The resulting EEM and lumen segmentations are fused in a temporal constraining and fusion module to determine the most likely correct boundaries, with robustness to morphology. Our proposed methods are extensively evaluated on unselected IVUS sequences, including normal, bifurcated, and calcified vessels with shadow artifacts. The results show that the proposed methods outperform the state of the art, with a Jaccard measure of 0.92 for the lumen and 0.94 for the EEM on the IVUS 2011 open challenge dataset. This work has been integrated into the software QCU-CMS to automatically segment IVUS images in a user-friendly environment.
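The Jaccard measure reported above is the standard intersection-over-union of binary segmentation masks. A minimal sketch (function name mine, not from the paper; the empty-union convention is an assumption):

```python
import numpy as np

def jaccard(pred, ref):
    """Jaccard index (IoU) between two binary masks:
    |pred AND ref| / |pred OR ref|."""
    pred = np.asarray(pred).astype(bool)
    ref = np.asarray(ref).astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, ref).sum() / union
```

A Jaccard of 0.92 for the lumen thus means the predicted and reference lumen masks overlap in 92% of their combined area.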

    CARDIAN: a novel computational approach for real-time end-diastolic frame detection in intravascular ultrasound using bidirectional attention networks

    Introduction: Changes in coronary artery luminal dimensions during the cardiac cycle can impact the accurate quantification of volumetric analyses in intravascular ultrasound (IVUS) image studies. Images acquired at different phases of the cardiac cycle may also lead to inaccurate quantification of atheroma volume due to the longitudinal motion of the catheter in relation to the vessel. As IVUS images are acquired throughout the cardiac cycle, end-diastolic (ED) frames are typically identified retrospectively by human analysts to minimize motion artefacts and enable more accurate and reproducible volumetric analysis. Accurate ED-frame detection is pivotal for guiding interventional decisions, optimizing therapeutic interventions, and ensuring standardized volumetric analysis in research studies.
    Methods: In this paper, a novel neural network-based approach for accurate end-diastolic frame detection in IVUS sequences is proposed, trained using electrocardiogram (ECG) signals acquired synchronously during IVUS acquisition. The framework integrates dedicated motion encoders and a bidirectional attention recurrent network (BARNet) with a temporal difference encoder to extract frame-by-frame motion features corresponding to the phases of the cardiac cycle. In addition, a spatiotemporal rotation encoder is included to capture the IVUS catheter's rotational movement with respect to the coronary artery.
    Results: With a prediction tolerance of 66.7 ms, the proposed approach found 71.9%, 67.8%, and 69.9% of end-diastolic frames in the left anterior descending, left circumflex, and right coronary arteries, respectively, when tested against ECG estimations. When compared with the estimations of two expert analysts, the approach achieved superior performance.
    Discussion: These findings indicate that the developed methodology is accurate and fully reproducible, and should therefore be preferred over expert annotation for end-diastolic frame detection in IVUS sequences.
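The 66.7 ms tolerance above implies a matching criterion between predicted and reference end-diastolic times. One plausible scoring sketch, assuming a greedy one-to-one matching (the function name and matching rule are my assumptions, not the paper's protocol):

```python
def hit_rate(predicted_ms, reference_ms, tolerance_ms=66.7):
    """Fraction of reference end-diastolic timestamps that have at
    least one unused prediction within +/- tolerance_ms. Each
    prediction may be matched to at most one reference frame."""
    used = set()
    hits = 0
    for ref in reference_ms:
        for i, pred in enumerate(predicted_ms):
            if i not in used and abs(pred - ref) <= tolerance_ms:
                used.add(i)
                hits += 1
                break
    return hits / len(reference_ms)
```

Under this scheme, the reported 71.9% for the left anterior descending artery would mean roughly seven in ten ECG-derived end-diastolic frames had a prediction within one third of a typical cardiac frame interval.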

    Intelligent Campus System Design Based on Digital Twin

    Amid the COVID-19 pandemic, prevention and control measures became normalized, prompting campuses to evolve from digital to intelligent and, ultimately, smart. Cutting-edge technologies such as big data, the Internet of Things, cloud computing, and artificial intelligence drive campus innovation, but problems remain: unintuitive scene presentation, lagging monitoring information, slow incident handling, and high operation and maintenance costs. This study therefore proposes using digital twin technology to digitally construct and fully represent the physical campus scene, accurately map the physical campus to a virtual campus with real-time sensing, and remotely control it, achieving reverse control of the physical campus from its virtual twin. Guided by the theoretical model of digital twin technology, the research combines UAV oblique photography and 3D modelling to build the virtual campus scene. At the design stage, the system's interactive channel is developed on Unity3D to realize real-time monitoring, decision making, and prevention based on dual-space data, and a spiral-optimization design scheme for the system life cycle is formed. The modules of the smart campus system were evaluated with a system usability scale based on student experience. The experimental results show that the virtual-real campus system can enhance school management and teaching, providing important implications for promoting the development and application of campus intelligent systems.
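The "system usability scale" mentioned above is conventionally Brooke's SUS. Assuming the standard ten-item, five-point instrument was used (the abstract does not say, and the function name is mine), the score is computed as:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert
    responses. Odd-numbered items (positively worded) contribute
    response - 1; even-numbered items (negatively worded) contribute
    5 - response. The summed contributions are scaled by 2.5 to a
    0-100 range."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5
```

A respondent who strongly agrees with every positive item and strongly disagrees with every negative item scores 100; uniformly neutral answers score 50.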

    Static Magnetic Field Inhibits Growth of Escherichia coli Colonies via Restriction of Carbon Source Utilization

    Magnetobiological effects on growth and virulence have been widely reported in Escherichia coli (E. coli). However, published results are varied and sometimes conflicting because the underlying mechanism remains unknown. Here, we report that the application of a 250 mT static magnetic field (SMF) significantly reduces the diameter of E. coli colony-forming units (CFUs) but has no impact on the number of CFUs. Transcriptomic analysis revealed that the inhibitory effect of the SMF is attributable to differentially expressed genes (DEGs) primarily involved in carbon source utilization. Consistently, the addition of glycolate or glyoxylate to the culture media restores the bacterial phenotype under SMF, and knockout mutants lacking glycolate oxidase are no longer sensitive to SMF. These results suggest that SMF treatment decreases glycolate oxidase activity. In addition, a metabolomic assay showed that long-chain fatty acids (LCFAs) accumulate while phosphatidylglycerol and medium-chain fatty acids decrease in SMF-treated bacteria, suggesting that SMF inhibits LCFA degradation. Based on the published evidence together with our findings, we propose a model in which free radicals generated by LCFA degradation are the primary target of SMF action, triggering the bacterial oxidative stress response and ultimately leading to growth inhibition.

    Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images

    Automated retinal blood vessel segmentation in fundus images provides important evidence to ophthalmologists for coping with prevalent ocular diseases in an efficient and non-invasive way. However, segmenting blood vessels in fundus images is a challenging task, due to the high variability in the scale and appearance of blood vessels and the high visual similarity between lesions and the retinal vasculature. Inspired by the way the visual cortex adaptively responds to the type of stimulus, we propose a Stimulus-Guided Adaptive Transformer Network (SGAT-Net) for accurate retinal blood vessel segmentation. It entails a Stimulus-Guided Adaptive Module (SGA-Module) that can extract local-global compound features based on an inductive bias and a self-attention mechanism. Alongside a lightweight residual encoder (ResEncoder) structure capturing the relevant details of appearance, a Stimulus-Guided Adaptive Pooling Transformer (SGAP-Former) is introduced to reweight the maximum and average pooling, enriching the contextual embedding representation while suppressing redundant information. Moreover, a Stimulus-Guided Adaptive Feature Fusion (SGAFF) module is designed to adaptively emphasize local details and global context and fuse them in the latent space, adjusting the receptive field (RF) to the task. The evaluation is conducted on the largest fundus image dataset (FIVES) and three popular retinal image datasets (DRIVE, STARE, CHASEDB1). Experimental results show that the proposed method achieves competitive performance over existing methods, with a clear advantage in avoiding errors that commonly occur in areas with highly similar visual features. The source code is publicly available at: https://github.com/Gins-07/SGAT
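The abstract does not detail how SGAP-Former reweights maximum and average pooling, so the following is only a loose illustration of the general idea (all names and the scalar blending weight are my assumptions, not the paper's design): a learnable coefficient can blend the two pooled descriptors of a feature map.

```python
import numpy as np

def reweighted_pool(features, alpha):
    """Blend max- and average-pooled descriptors of a feature map.

    features -- (H, W, C) array; alpha in [0, 1] weights max pooling.
    Emphasising the max response favours thin, high-contrast vessels,
    while the average favours smooth contextual structure; a learned
    alpha lets the network trade the two off per task.
    """
    max_pool = features.max(axis=(0, 1))
    avg_pool = features.mean(axis=(0, 1))
    return alpha * max_pool + (1.0 - alpha) * avg_pool
```

In a real transformer module alpha would be produced by a small learned gate rather than supplied by hand; the sketch only shows the reweighting itself.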

    MAC-ResNet: Knowledge Distillation Based Lightweight Multiscale-Attention-Crop-ResNet for Eyelid Tumors Detection and Classification

    Eyelid tumors occur in the eye and its appendages, affecting vision and appearance, causing blindness and disability, and in some cases carrying a high lethality rate. Pathological images of eyelid tumors are characterized by large pixel counts, multiple scales, and similar features. Solving the difficult and time-consuming problem of fine-grained classification of pathological images is important for improving the efficiency and quality of pathological diagnosis. The morphologies of Basal Cell Carcinoma (BCC), Meibomian Gland Carcinoma (MGC), and Cutaneous Melanoma (CM) in eyelid tumors are very similar, and the categories are easily misdiagnosed as one another. In addition, the diseased area, which is decisive for the diagnosis, usually occupies only a relatively small portion of the entire pathology section, and screening the area of interest is a tedious and time-consuming task. In this paper, we apply deep learning techniques to investigate the pathological images of eyelid tumors. Inspired by the knowledge distillation process, we propose the Multiscale-Attention-Crop-ResNet (MAC-ResNet) network model to achieve automatic classification of the three malignant tumors, together with automatic localization of whole slide imaging (WSI) lesion regions using U-Net. The final accuracy rates of MAC-ResNet on the three classification problems of eyelid tumors were 96.8%, 94.6%, and 90.8%, respectively.
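MAC-ResNet is described as knowledge-distillation based. The classic distillation objective (Hinton et al.) trains a small student to match a large teacher's temperature-softened class distribution; a minimal sketch of that objective (not the paper's exact loss, and the function names are mine):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence from the teacher's temperature-softened
    distribution to the student's: the core distillation objective.
    A higher T exposes more of the teacher's 'dark knowledge' about
    relative similarities between classes."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

In practice this term is combined with the ordinary cross-entropy on hard labels; the sketch shows only the teacher-matching part.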